Search for: All records; Creators/Authors contains: "Herrmann, Felix J"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. We develop a semiamortized variational inference (VI) framework for computationally feasible uncertainty quantification in full-waveform inversion that explores the multimodal posterior distribution without dimensionality reduction. The framework, full-waveform VI via subsurface extensions with refinements (WISER), builds on a supervised generative artificial intelligence method that performs approximate amortized inference at low cost, albeit with an amortization gap. This gap is closed through nonamortized refinements that make frugal use of wave physics. Case studies illustrate that WISER delivers full-resolution, computationally feasible, and reliable uncertainty estimates of velocity models and imaged reflectivities. (A toy sketch of this semiamortized refinement loop appears after this list.)
    Free, publicly-accessible full text available March 1, 2026
  2. Owing to their built-in uncertainty quantification, Bayesian solutions to inverse problems are the framework of choice in risk-averse applications. These benefits come at the cost of computations that are, in general, intractable. Advances in machine learning and variational inference (VI) have lowered this computational barrier by leveraging data-driven learning. Two VI paradigms have emerged that represent different tradeoffs: amortized and non-amortized. Amortized VI produces fast results but, because it generalizes over many observed datasets, yields suboptimal inference; non-amortized VI is slower at inference but finds better posterior approximations because it specializes to a single observed dataset. Current amortized VI techniques hit a sub-optimality wall that cannot be overcome without more expressive neural networks or extra training data. We present a solution that enables iterative improvement of amortized posteriors using the same network architectures and training data. Our method requires extra computations, but these remain frugal since they are based on physics-hybrid methods and summary statistics. Importantly, these computations remain mostly offline, so our method retains cheap and reusable online evaluation while bridging the optimality gap between the two paradigms. We denote our proposed method ASPIRE: Amortized posteriors with Summaries that are Physics-based and Iteratively REfined. We first validate the method on a stylized problem with a known posterior and then demonstrate its practical use on a high-dimensional, nonlinear transcranial medical-imaging problem with ultrasound. Compared with the baseline and previous methods in the literature, ASPIRE stands out as a computationally efficient and high-fidelity method for posterior inference. (A toy version of the refinement rounds is sketched after this list.)
    Free, publicly-accessible full text available March 14, 2026
  3. We introduce a probabilistic technique for full-waveform inversion that uses variational inference and conditional normalizing flows to quantify uncertainty in migration-velocity models and its impact on imaging. Our approach integrates generative artificial intelligence with physics-informed common-image gathers, reducing reliance on accurate initial velocity models. Case studies demonstrate its efficacy in producing realizations of migration-velocity models conditioned on the data; these models are then used to quantify amplitude and positioning effects during subsequent imaging. (A generic posterior-sampling sketch appears after this list.)
  4. Normalizing flows are a density-estimation method that provides efficient, exact likelihood evaluation and sampling from high-dimensional distributions (Dinh et al., 2014). The method relies on the change-of-variables formula, which requires an invertible transform, so normalizing-flow architectures are built to be invertible by design (Dinh et al., 2014). In theory this invertibility constrains expressiveness, but coupling layers let normalizing flows exploit the power of arbitrary neural networks, which need not themselves be invertible (Dinh et al., 2016), and layer invertibility means that, if properly implemented, many layers can be stacked to increase expressiveness without creating a training memory bottleneck. The package we present, InvertibleNetworks.jl, is a pure Julia (Bezanson et al., 2017) implementation of normalizing flows. We have implemented many relevant neural network layers, including invertible 1x1 convolutions from GLOW (Kingma & Dhariwal, 2018), affine/additive coupling layers (Dinh et al., 2014), Haar wavelet multiscale transforms (Haar, 1909), and hierarchical invertible neural transport (HINT) (Kruse et al., 2021), among others. These modular layers can easily be composed and modified to create different types of normalizing flows. As starting points, we have implemented RealNVP, GLOW, HINT, hyperbolic networks (Lensink et al., 2022), and their conditional counterparts, so users can quickly build their own applications. (The coupling-layer mechanics are sketched in code after this list.)
  5. The industry is experiencing significant changes due to artificial intelligence (AI) and the challenges of the energy transition. While some view these changes as threats, recent advances in AI offer unique opportunities, especially in the context of “digital twins” for subsurface monitoring and control. 
  6. Modern-day reservoir management and monitoring of geologic carbon storage increasingly call for costly time-lapse seismic data collection. We demonstrate how techniques from graph theory can be used to optimize acquisition geometries for low-cost sparse 4D seismic data. Based on midpoint-offset-domain connectivity arguments, our algorithm automatically produces sparse, nonreplicated time-lapse acquisition geometries that favor wavefield recovery. (A toy geometry-selection sketch appears after this list.)
  7. Geologic carbon storage represents one of the few truly scalable technologies capable of reducing the CO2 concentration in the atmosphere. While this technology has the potential to scale, its success hinges on our ability to mitigate its risks. An important aspect of risk mitigation concerns assurances that the injected CO2 remains within the storage complex. Among the different monitoring modalities, seismic imaging stands out due to its ability to attain high-resolution and high-fidelity images. However, these superior features come at prohibitive costs and time-intensive efforts that potentially render extensive seismic monitoring undesirable. To overcome this shortcoming, we present a methodology in which time-lapse images are created by jointly inverting nonreplicated time-lapse monitoring data. By no longer insisting on replication of the surveys to obtain high-fidelity time-lapse images and differences, extreme costs and time-consuming labor are averted. To demonstrate our approach, hundreds of realistic synthetic noisy time-lapse seismic data sets are simulated that contain imprints of regular CO2 plumes and irregular plumes that leak. These time-lapse data sets are subsequently inverted to produce time-lapse difference images that are used to train a deep neural classifier. The testing results show that the classifier is capable of detecting CO2 leakage automatically on unseen data with reasonable accuracy. We consider this classifier a first step in the development of an automatic workflow designed to handle the large number of continuously monitored CO2 injection sites needed to help combat climate change. (A minimal classifier setup is sketched after this list.)
  8. We present the Seismic Laboratory for Imaging and Modeling/Monitoring open-source software framework for computational geophysics and, more generally, for inverse problems involving the wave equation (e.g., seismic and medical ultrasound), regularization with learned priors, and learned neural surrogates for multiphase flow simulations. By integrating multiple layers of abstraction, the software is designed to be both readable and scalable, allowing researchers to formulate problems abstractly while exploiting the latest developments in high-performance computing. The design principles and their benefits are illustrated and demonstrated by building a scalable prototype for permeability inversion from time-lapse crosswell seismic data, which, aside from coupling wave physics and multiphase flow, involves machine learning. (The operator-composition idea is sketched after this list.)
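Illustrative code sketches for selected records above. Each is a toy stand-in under explicitly stated assumptions, not the authors' implementation.

Sketch for record 1 (WISER). A minimal numpy rendition of the semiamortized idea: a fixed linear operator F stands in for wave-equation modeling, and a least-squares guess stands in for the output of the pretrained amortized network; the nonamortized stage then runs a few stochastic ELBO steps specialized to one observed dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "physics": d = F m + noise, a stand-in for wave-equation modeling.
n_d, n_m = 40, 20
F = rng.normal(size=(n_d, n_m)) / np.sqrt(n_d)
m_true = rng.normal(size=n_m)
sigma = 0.1
d_obs = F @ m_true + sigma * rng.normal(size=n_d)

# Stage 1 (amortized, hypothetical): a pretrained generative network would map
# d_obs to an initial Gaussian variational posterior q(m). A least-squares
# guess stands in for that network's output here.
mu = np.linalg.lstsq(F, d_obs, rcond=None)[0]
ls = np.zeros(n_m)  # log standard deviations of q

# Stage 2 (nonamortized refinement): stochastic ELBO ascent that reuses the
# physics operator frugally to close the amortization gap for this one d_obs.
lr = 1e-3
for _ in range(500):
    eps = rng.normal(size=n_m)
    m = mu + np.exp(ls) * eps                # reparameterization trick
    g = F.T @ (F @ m - d_obs) / sigma**2     # gradient of the data misfit
    mu -= lr * g
    ls -= lr * (g * eps * np.exp(ls) - 1.0)  # misfit term minus entropy term

rel = np.linalg.norm(mu - m_true) / np.linalg.norm(m_true)
print(f"posterior mean rel. error {rel:.3f}, mean posterior std {np.exp(ls).mean():.3f}")
```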
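Sketch for record 2 (ASPIRE). A toy rendition of the iterative-refinement loop: a linear map fit by least squares stands in for the amortized conditional network, and the adjoint applied to a data residual stands in for the physics-based summary statistic. The refinement rounds reuse the same training pairs, mirroring the paper's point that the extra cost is mostly offline; the actual flow-based networks and ultrasound physics differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear physics, a stand-in for the transcranial ultrasound operator.
n_d, n_m = 50, 30
F = rng.normal(size=(n_d, n_m)) / np.sqrt(n_d)

def summary(d, m_ref):
    # Physics-based summary: adjoint applied to the data residual at a
    # reference model (loosely, a migration-like image).
    return m_ref + F.T @ (d - F @ m_ref)

# Offline: simulate training pairs (m_i, d_i) once; reused in every round.
N = 2000
M = rng.normal(size=(N, n_m))
D = M @ F.T + 0.05 * rng.normal(size=(N, n_d))

# One observed dataset for the cheap online evaluation.
m_true = rng.normal(size=n_m)
d_obs = F @ m_true + 0.05 * rng.normal(size=n_d)

refs = np.zeros((N, n_m))   # per-sample reference models; round 0 starts at zero
ref_obs = np.zeros(n_m)
for it in range(3):         # ASPIRE-style refinement rounds (mostly offline)
    S = np.stack([summary(d, r) for d, r in zip(D, refs)])
    # Stand-in for the amortized conditional network: a linear map fit from
    # summaries to models, using the SAME training data every round.
    W = np.linalg.lstsq(S, M, rcond=None)[0]
    refs = S @ W                             # refined references for next round
    m_post = summary(d_obs, ref_obs) @ W     # the online step stays cheap
    ref_obs = m_post
    err = np.linalg.norm(m_post - m_true) / np.linalg.norm(m_true)
    print(f"round {it}: relative error {err:.3f}")
```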
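Sketch for record 3. Once any conditional sampler is trained, uncertainty quantification reduces to drawing realizations and computing sample statistics. Everything below (the sampler, its inputs, the constants) is a hypothetical stand-in for a trained conditional normalizing flow conditioned on common-image gathers.

```python
import numpy as np

rng = np.random.default_rng(2)
n_m = 64

def sample_velocity(z, cig):
    # Hypothetical stand-in for a trained conditional normalizing flow that
    # maps a latent vector z and common-image-gather features to one
    # migration-velocity realization. A real flow is learned from data.
    return 2.5 + 0.1 * z + 0.02 * cig.mean()

cig_obs = rng.normal(size=256)   # stand-in conditioning input
samples = np.stack([sample_velocity(rng.normal(size=n_m), cig_obs)
                    for _ in range(1000)])

v_mean = samples.mean(axis=0)    # conditional-mean velocity model
v_std = samples.std(axis=0)      # pointwise uncertainty for imaging studies
print("mean velocity:", v_mean[:3], "pointwise std:", v_std[:3])
```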
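Sketch for record 4 (InvertibleNetworks.jl). The package itself is Julia; the numpy sketch below only illustrates the affine-coupling idea it builds on (Dinh et al., 2016) and is not the package's API. The transform is invertible for any conditioner network because the conditioner is only ever evaluated, never inverted, and the log-determinant needed by the change-of-variables formula is simply the sum of the scales.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
half = d // 2

# Conditioner weights; normally an arbitrary neural network, since the
# conditioner never needs to be inverted. A linear map keeps this minimal.
W_s = 0.1 * rng.normal(size=(half, half))
W_t = 0.1 * rng.normal(size=(half, half))

def coupling_forward(x):
    """Affine coupling: split x, transform one half conditioned on the other."""
    x1, x2 = x[:half], x[half:]
    s, t = np.tanh(W_s @ x1), W_t @ x1
    y2 = x2 * np.exp(s) + t
    return np.concatenate([x1, y2]), s.sum()  # exact log|det J| = sum of scales

def coupling_inverse(y):
    """Exact inverse: y1 passed through unchanged, so s and t are recomputable."""
    y1, y2 = y[:half], y[half:]
    s, t = np.tanh(W_s @ y1), W_t @ y1
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

x = rng.normal(size=d)
z, logdet = coupling_forward(x)
assert np.allclose(coupling_inverse(z), x)    # invertible by construction

# Change of variables: log p_X(x) = log p_Z(z) + log|det J|, p_Z standard normal.
log_pz = -0.5 * (z @ z) - 0.5 * d * np.log(2 * np.pi)
print("exact log-likelihood:", log_pz + logdet)
```

Stacking many such layers, with the halves permuted between them, increases expressiveness while keeping both the inverse and the likelihood exact, which is what lets flows offer exact sampling and density evaluation at once.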
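Sketch for record 6. A toy stand-in under assumed parameters: the paper's algorithm argues via midpoint-offset-domain connectivity, whereas the greedy loop here merely maximizes midpoint-offset coverage as a crude proxy, with hypothetical source/receiver spacings and bin sizes.

```python
import random

random.seed(4)

# Hypothetical candidate geometry: sources every 50 m, receivers every 25 m
# along a 1 km line.
srcs = [50.0 * i for i in range(21)]
recs = [25.0 * i for i in range(41)]
pairs = [(s, r) for s in srcs for r in recs]
random.shuffle(pairs)  # avoid systematic ordering bias in the greedy loop

def mh_cell(s, r, dm=50.0, dh=50.0):
    """Midpoint-offset cell (bin indices) of one source-receiver pair."""
    return (int(((s + r) / 2.0) // dm), int(abs(s - r) // dh))

# Greedy selection under a budget: prefer pairs landing in uncovered
# midpoint-offset cells, a coverage proxy for the connectivity criterion.
budget, covered, chosen = 60, set(), []
for _ in range(budget):
    best = max(pairs, key=lambda p: mh_cell(*p) not in covered)
    chosen.append(best)
    covered.add(mh_cell(*best))
    pairs.remove(best)

print(f"selected {len(chosen)} pairs covering {len(covered)} midpoint-offset cells")
```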
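Sketch for record 7. A minimal PyTorch setup for a binary leakage classifier over time-lapse difference images. The architecture, image size, labels, and training loop are illustrative assumptions, not the paper's network or data.

```python
import torch
import torch.nn as nn

# Minimal binary classifier over single-channel time-lapse difference images.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.LazyLinear(1),   # one logit: leakage vs. no leakage
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in batch: 8 random 64x64 difference images with random labels.
x = torch.randn(8, 1, 64, 64)
y = torch.randint(0, 2, (8, 1)).float()

for _ in range(5):      # a few training steps on the toy batch
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print("leak probability:", torch.sigmoid(model(x[:1])).item())
```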
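Sketch for record 8. A toy illustration of the composition-of-operators style that the framework description suggests: problems are phrased as chains of operators so the code reads like the math, while backends can be swapped underneath. All operator names and maps here are invented stand-ins, not the actual SLIM APIs.

```python
import numpy as np

rng = np.random.default_rng(5)

class Operator:
    """Minimal composable-operator abstraction: (A * B)(x) == A(B(x))."""
    def __init__(self, fwd):
        self.fwd = fwd
    def __call__(self, x):
        return self.fwd(x)
    def __mul__(self, other):
        return Operator(lambda x: self(other(x)))

# Invented stand-ins for the three coupled physics layers in the abstract.
A = rng.normal(size=(30, 20))
flow = Operator(np.tanh)                  # permeability -> saturation (surrogate)
rock = Operator(lambda S: 1.5 + 0.5 * S)  # saturation -> velocity (rock physics)
wave = Operator(lambda v: A @ v)          # velocity -> seismic data (modeling)

forward = wave * rock * flow              # the whole chain reads like math
K = rng.normal(size=20)
print(forward(K).shape)                   # (30,)
```

Keeping each layer behind a uniform interface is what lets a prototype like permeability inversion mix wave physics, multiphase flow surrogates, and machine learning without entangling their implementations.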